
Conversation

@dustin12
Contributor

Fixes two problems found in the async DoFn during load testing:

  1. We should not perform blocking operations while holding the lock in commit_finished_items; add items outside the lock.
  2. Timers should be skewed so they don't all fire at once.
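The first fix follows a standard concurrency pattern that can be sketched in isolation (the class and fields below are hypothetical; only the names `commit_finished_items` and `schedule_item` come from the PR): snapshot and clear the shared list while holding the lock, then do the potentially blocking scheduling after releasing it.

```python
import threading

class Committer:
  """Sketch: drain shared state under the lock, do slow work outside it."""

  def __init__(self):
    self._lock = threading.Lock()
    self._finished = []   # items completed by worker threads
    self.scheduled = []   # stand-in for downstream scheduling

  def schedule_item(self, item):
    # Stand-in for a potentially blocking call (e.g. waiting on a queue).
    self.scheduled.append(item)

  def commit_finished_items(self):
    # Hold the lock only long enough to snapshot and clear the list.
    with self._lock:
      to_schedule = self._finished
      self._finished = []
    # Blocking work happens outside the critical section.
    for item in to_schedule:
      self.schedule_item(item)

c = Committer()
c._finished = ["a", "b"]
c.commit_finished_items()
```

The key point is that other threads can acquire the lock and append new finished items while the slow scheduling loop runs.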

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@gemini-code-assist
Contributor

Summary of Changes

Hello @dustin12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses critical performance and stability issues identified during load testing of asynchronous processing within the system. The primary focus is on optimizing resource utilization and preventing contention by ensuring that blocking operations do not occur under locks and by distributing the load of timer-triggered events. These changes collectively aim to enhance the robustness and efficiency of the asynchronous DoFn, particularly under high-load scenarios.

Highlights

  • Non-blocking operations under lock: Refactored the commit_finished_items method to ensure that potentially blocking schedule_item calls are performed outside of critical sections protected by locks, improving concurrency and preventing deadlocks or performance bottlenecks.
  • Timer Skewing: Implemented a mechanism to introduce randomness into timer firing times, preventing all timers from triggering simultaneously. This 'skewing' helps distribute load and avoid 'thundering herd' issues during peak activity.
  • Improved State Coder: Updated the TO_PROCESS state specification to use FastPrimitivesCoder instead of StrUtf8Coder, allowing for more flexible and efficient serialization of primitive types.
  • Reduced Scheduling Sleep Time: Decreased the sleep duration in the schedule_item method from 1 second to 0.01 seconds, making the item scheduling process more responsive and reducing perceived latency.
  • Configurable Timer Frequency: Made the _timer_frequency parameter configurable during the initialization of AsyncWrapper, allowing for more dynamic control over timer behavior.
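The timer-skewing highlight can be sketched standalone (passing `now` and `frequency` explicitly is a simplification; the real method reads them from the instance): round up to the next bucket boundary, then add a deterministic per-key offset within one bucket.

```python
import random
from math import floor

def next_time_to_fire(now, key, frequency=10):
  """Next firing time: the bucket boundary after `now`, plus a
  deterministic per-key offset in [0, frequency)."""
  random.seed(key)  # same key -> same offset, so re-setting the timer is stable
  base = floor((now + frequency) / frequency) * frequency
  return base + random.random() * frequency

# The same key always maps to the same time within a bucket...
t1 = next_time_to_fire(3.0, "key-a")
t2 = next_time_to_fire(7.0, "key-a")
# ...while different keys get different offsets, spreading the load.
t3 = next_time_to_fire(3.0, "key-b")
```

Because the offset is a pure function of the key, an element arriving later in the same bucket recomputes the identical firing time instead of pushing the timer out.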

@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers

@codecov

codecov bot commented Oct 23, 2025

Codecov Report

❌ Patch coverage is 14.28571% with 12 lines in your changes missing coverage. Please review.
✅ Project coverage is 40.21%. Comparing base (db92a3a) to head (b6c566e).
⚠️ Report is 89 commits behind head on master.

Files with missing lines Patch % Lines
sdks/python/apache_beam/transforms/async_dofn.py 14.28% 12 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff              @@
##             master   #36596       +/-   ##
=============================================
- Coverage     56.93%   40.21%   -16.72%     
  Complexity     3394     3394               
=============================================
  Files          1222     1222               
  Lines        186787   187021      +234     
  Branches       3545     3545               
=============================================
- Hits         106348    75214    -31134     
- Misses        77067   108435    +31368     
  Partials       3372     3372               
Flag Coverage Δ
python 40.51% <14.28%> (-40.50%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.


@github-actions
Contributor

Assigning reviewers:

R: @jrmccluskey for label python.

Note: If you would like to opt out of this review, comment assign to next reviewer.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@dustin12
Contributor Author

Run precommit

      coders.TupleCoder([coders.StrUtf8Coder(), coders.StrUtf8Coder()]),
  )
  _timer_frequency = 20
      coders.TupleCoder(
Contributor

This is a backwards-incompatible change, since you're swapping to a different coder.

Contributor Author

There shouldn't be any usage of this yet so I'm OK with that.

  self._parallelism = parallelism
  self._max_wait_time = max_wait_time
- self._timer_frequency = 20
+ self._timer_frequency = callback_frequency
Contributor

As best I can tell, self._timer_frequency is used but self.timer_frequency_ is not. Is there any reason to have both? Same goes for all of these duped fields

  sleep_time = 0.01
  total_sleep = 0
  while not done:
    timeout = 1
Contributor

the timeout duration could be configurable in __init__()
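One way the suggestion could look (a hypothetical sketch, not the PR's actual code): accept the timeout in `__init__` alongside `callback_frequency`, defaulting to the current hard-coded value.

```python
class AsyncWrapper:  # illustrative subset of the real class
  def __init__(self, callback_frequency=20, schedule_timeout=1.0):
    self._timer_frequency = callback_frequency
    # Was a hard-coded `timeout = 1` inside schedule_item.
    self._schedule_timeout = schedule_timeout

  def schedule_item(self, item, ready):
    """Poll `ready()` until it succeeds or the configured timeout elapses."""
    sleep_time = 0.01
    total_sleep = 0.0
    while not ready():
      # The real code would sleep(sleep_time) here; elided for brevity.
      total_sleep += sleep_time
      if total_sleep >= self._schedule_timeout:
        return False  # gave up within the configured timeout
    return True

w = AsyncWrapper(schedule_timeout=0.05)
```

Exposing the timeout this way keeps the default behavior unchanged while letting load tests tune it without a code change.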

Comment on lines +259 to +264

  def next_time_to_fire(self, key):
    random.seed(key)
    return (
        floor((time() + self._timer_frequency) / self._timer_frequency) *
-       self._timer_frequency)
+       self._timer_frequency) + (
+           random.random() * self._timer_frequency)
Contributor

I feel like doing all of the work to find a round increment of _timer_frequency is wasted compute once you add the extra fuzziness of random.random() * self._timer_frequency, since you're no longer on a round increment afterwards.

Contributor Author

I started with keys setting a timer at now + 10s. That doesn't work because as new work arrives the timer firing time keeps getting pushed out, i.e. an element arrives at t=1, we want to check back on it at t=11 so we set the timer, but then an element arrives at t=9 and overwrites the timer to t=19.

The next setup was this round-increment firing time: any message that arrives between t=0 and t=10 sets the timer for t=10. That way the element at t=9 doesn't override the timer to t=19 but keeps it at t=10.

That works, but it means we see a spike of timers at t=10, t=20, t=30, etc. There isn't any reason the timers all need to fire at these round increments, so this is attempting to add fuzzing per key (since timers are per key). Ideally this means that any one key has buckets 10s apart, so the overwriting problem is fixed, but also that across multiple keys the buckets don't all fire at the same time. I believe this is what the random.seed(key) on line 260 is doing, but correct me if I'm wrong.

Also, let me know if you know an easier way to achieve this pattern.
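The overwrite problem described above can be made concrete with toy numbers (a standalone sketch with a frequency of 10 seconds; function names are illustrative only):

```python
from math import floor

FREQ = 10

def naive(now):
  # Naive scheme: always fire FREQ seconds after the latest arrival.
  return now + FREQ

def bucketed(now):
  # Bucketed scheme: fire at the next multiple of FREQ.
  return floor((now + FREQ) / FREQ) * FREQ

# An element arrives at t=1; both schemes aim near t=10/11.
# A second element at t=9 then re-sets the timer:
naive_fire = naive(9)        # pushed out to t=19 (starves the first element)
bucketed_fire = bucketed(9)  # stays at t=10
```

The per-key random offset in the PR then shifts each key's boundaries by a fixed amount within the bucket, so the buckets stay 10s apart per key but no longer coincide across keys.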
